Implement move non preferred phase in allocator #134429
Conversation
Decision canRemain = allocation.deciders().canRemain(shardRouting, routingNode, allocation);
if ((canRemain.type() == Type.NOT_PREFERRED || canRemain.type() == Type.NO) == false) {
    return MoveDecision.remain(canRemain);
}
We will consider NO and NOT_PREFERRED here, because it may be that a NO is really a NOT_PREFERRED that's also a NO.
I don't quite follow here. I'd appreciate it if you could help me understand it better. Do you mean the decision could be an overall NO because some other decider says NO while the writeLoad decider says NOT_PREFERRED? Since we run moveShards first, do we still need to consider NO here?
In Allocation#withDeciders we return the "most negative" decision, which, when one decider says NOT_PREFERRED and another says NO, will be NO. Because we're iterating in the order of most-desirable-to-move first, if we see either of these values returned it makes sense to assume there was a NOT_PREFERRED in there and make the move anyway. The alternative would be to assume there was no NOT_PREFERRED when there is a NO and potentially move a less-preferred shard.
This will come into play now: as @DiannaHohensee and I discussed this morning, it's probably better to run moveNotPreferred first, because otherwise we risk moving a sub-optimal shard when NO and NOT_PREFERRED intersect.
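To make the "most negative decision" behaviour concrete, here is a small self-contained toy sketch; it is not the actual Allocation#withDeciders code, and the enum and its ordering are assumptions made purely for illustration:

import java.util.List;

// Toy illustration of "the most negative decision wins"; the ordering of this enum is assumed.
enum Verdict {
    YES,
    NOT_PREFERRED,
    NO
}

class MostNegativeWins {
    static Verdict combine(List<Verdict> perDeciderVerdicts) {
        Verdict worst = Verdict.YES;
        for (Verdict v : perDeciderVerdicts) {
            if (v.ordinal() > worst.ordinal()) {
                worst = v;
            }
        }
        return worst;
    }

    public static void main(String[] args) {
        // One decider says NOT_PREFERRED and another says NO: the combined result is NO,
        // which is why a NO seen by the caller may be "hiding" a NOT_PREFERRED.
        System.out.println(combine(List.of(Verdict.NOT_PREFERRED, Verdict.NO)));  // NO
        System.out.println(combine(List.of(Verdict.YES, Verdict.NOT_PREFERRED))); // NOT_PREFERRED
    }
}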
Can you elaborate on that argument about moving non-preferred first? I would naively think we want to ensure we move all hard rules first - to vacate nodes - and then move the non-preferred shards after.
I think that also avoids this slightly confusing check.
I'll note that moveShards should try to move shards to places where canAllocate says YES over places where it says NOT_PREFERRED, which seems to solve the sub-optimal shard movement issue?
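As a rough illustration of that preference (not the actual moveShards implementation; the types and names below are invented for the example), choosing a YES target before falling back to a NOT_PREFERRED one could look like this:

import java.util.List;
import java.util.Optional;

class TargetSelection {
    enum CanAllocate { YES, NOT_PREFERRED, NO }

    record Candidate(String node, CanAllocate decision) {}

    // Prefer a YES target; only fall back to NOT_PREFERRED when no YES target exists.
    static Optional<String> chooseTarget(List<Candidate> candidates) {
        return candidates.stream()
            .filter(c -> c.decision() == CanAllocate.YES)
            .findFirst()
            .or(() -> candidates.stream().filter(c -> c.decision() == CanAllocate.NOT_PREFERRED).findFirst())
            .map(Candidate::node);
    }

    public static void main(String[] args) {
        System.out.println(chooseTarget(List.of(
            new Candidate("node-a", CanAllocate.NOT_PREFERRED),
            new Candidate("node-b", CanAllocate.YES)
        ))); // Optional[node-b]
    }
}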
Can you elaborate on that argument about moving non-preferred first?

The ShardMovementWriteLoadSimulator will simulate the end of a hot-spot as soon as a single shard leaves the node that is hot-spotting. So if moveShards runs first, it could eliminate the hot-spot before we reach moveNonPreferred and have the opportunity to select a sensible shard.
I would naively think we want to ensure we move all hard-rules first - to vacate nodes - and then move the non-preferred after.
The Balanced/DesiredBalanceShardsAllocators do not pick the order of shard movement; the Reconciler does that. The allocator and reconciler happen to have the same order, but I think the priority for the allocator is to make the best choices, not to consider shard movement priority. The reconciler behavior is actually in my balancer changes patch.
The exception is allocateUnassigned for primaries, for which there's an early exit from the allocators to publish the DesiredBalance ASAP.
 * @return An iterator containing shards we'd like to move to a preferred allocation
 */
Iterator<ShardRouting> createNonPreferredShardIterator(RoutingAllocation allocation);
}
We could also just use Function<RoutingAllocation, Iterable<ShardRouting>>, but having a specific interface gives us somewhere to put the documentation and a NOOP implementation? Not particularly tied to the existence of this interface, but that's why I've put it here.
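For illustration, a sketch of what such an interface plus a NOOP implementation might look like, based on the signature quoted above (the interface and constant names are inferred from the field names elsewhere in the PR, not copied from it):

import java.util.Collections;
import java.util.Iterator;

import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;

/**
 * Produces the shards we would like to move away from their current (non-preferred) location,
 * ordered with the most desirable moves first.
 */
public interface NonPreferredShardIteratorFactory {

    /**
     * @return An iterator containing shards we'd like to move to a preferred allocation
     */
    Iterator<ShardRouting> createNonPreferredShardIterator(RoutingAllocation allocation);

    /** No-op implementation: never proposes any moves. */
    NonPreferredShardIteratorFactory NOOP = allocation -> Collections.emptyIterator();
}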
boolean movedAShard;
do {
    // Any time we move a shard, we need to update the cluster info and ask again for the non-preferred shards
    // as they may have changed
    movedAShard = false;
    for (Iterator<ShardRouting> nonPreferredShards = nonPreferredShardIteratorFactory.createNonPreferredShardIterator(
        allocation
    ); nonPreferredShards.hasNext();) {
        if (tryMoveShardIfNonPreferred(nonPreferredShards.next())) {
            movedAShard = true;
            break;
        }
    }
    // TODO: Update cluster info
} while (movedAShard);
maybe return Iterable, then the whole thing would be:

for (var shard : nonPreferredIterable(allocation)) {
    if (tryMoveShardIfNonPreferred(shard)) {
        return;
    }
}
The code here is what the ultimate solution should look like, where we are able to do multiple moves in a single allocate call. We stop iterating when no moves are made, but each time we make a move we refresh the shard iterator, because we may have resolved a hot-spot, which could change the order or contents of the list.
For example, if there are two hot-spotted nodes (N and M), the first time we call for the iterator it will be:
N1, N2, N3, M1, M2, M3, ...
then if we successfully move N2 and it resolves the hot-spot, we'll ask again and get:
M1, M2, M3, ...
Hence the nested loop, but true, the inner loop could be tidier with an Iterable.
Switched to Iterable in cebf3a9.
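A self-contained toy showing the same structure after the switch to Iterable: keep asking for a freshly prioritised iterable and stop once a full pass makes no move. The factory and move logic below are stand-ins, not the PR's code:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.function.Supplier;

class RefreshAfterEachMove {
    // Re-request a fresh, re-prioritised iterable after every successful move;
    // stop as soon as a full pass over the iterable moves nothing.
    static <T> int moveUntilStable(Supplier<Iterable<T>> nonPreferredIterable, Predicate<T> tryMove) {
        int moves = 0;
        boolean movedAShard;
        do {
            movedAShard = false;
            for (T shard : nonPreferredIterable.get()) {
                if (tryMove.test(shard)) {
                    moves++;
                    movedAShard = true;
                    break; // ask for the iterable again, since the move may have changed the priorities
                }
            }
        } while (movedAShard);
        return moves;
    }

    public static void main(String[] args) {
        // Stand-ins: the "m" shards are movable, "n1" is not; each successful move removes the shard.
        List<String> remaining = new ArrayList<>(List.of("m1", "m2", "n1"));
        int moves = moveUntilStable(() -> List.copyOf(remaining), shard -> shard.startsWith("m") && remaining.remove(shard));
        System.out.println(moves); // 2
    }
}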
        hotSpottedNodes.add(new NodeShardIterable(allocation, node, writeThreadPoolStats.maxThreadPoolQueueLatencyMillis()));
    }
}
return new NodeShardIterator(hotSpottedNodes.iterator());
hotSpottedNodes seems to be all nodes with stats available, not necessarily hot-spotting ones.
Is there a missing maxQueueLatency threshold check?
No, this is intentional and perhaps more of a naming issue; ideally I think this iterator factory just produces the iterator and doesn't do any filtering at all (that's the job of the deciders). It just returns shards in an order where the most desirable to move are presented first.
I realise I am doing some filtering by excluding nodes with no utilisation and shards with no write load, but that goes to my earlier comment about it being over-fitted to the write load use case. We can change that if things change, but currently there's no sense in investigating those shards.
private Iterator<ShardRouting> createShardIterator() {
    final var shardWriteLoads = allocation.clusterInfo().getShardWriteLoads();
    final List<ShardRouting> sortedRoutings = new ArrayList<>();
    double totalWriteLoad = 0;
    for (ShardRouting shard : routingNode) {
        Double shardWriteLoad = shardWriteLoads.get(shard.shardId());
        if (shardWriteLoad != null) {
            sortedRoutings.add(shard);
            totalWriteLoad += shardWriteLoad;
        }
    }
    // TODO: Work out what this order should be
    // Sort by distance-from-mean-write-load
    double meanWriteLoad = totalWriteLoad / sortedRoutings.size();
    sortedRoutings.sort(Comparator.comparing(sr -> Math.abs(shardWriteLoads.get(sr.shardId()) - meanWriteLoad)));
    return sortedRoutings.iterator();
}
If I recall correctly, Henning mentioned picking a shard somewhere in the middle. I think we don't need a sort (strong order) but a set of average shards. For example, create two partitions - preferable and not. Everything that is 0.5-0.8 of maxShardLoad goes to preferable.
Yeah, we'd like to do that and I find that slightly harder with returning a list (though possibly doable).
I was thinking about this: an iterator that returns average shards first, then heavy shards (>0.8) if nothing worked, then light ones (<0.5). In the worst case we traverse the shards 4 times: find the max load, then any average, then any heavy, then any light.

private Stream<ShardRouting> shardsStream() {
    return StreamSupport.stream(routingNode.spliterator(), false);
}
...
var maxLoad = shardsStream().mapToDouble(ShardRouting::load).max().orElse(1.0);
var avg = shardsStream().filter(s -> s.load() / maxLoad >= 0.5 && s.load() / maxLoad <= 0.8);
var heavy = shardsStream().filter(s -> s.load() / maxLoad > 0.8);
var light = shardsStream().filter(s -> s.load() / maxLoad < 0.5);
return concat(concat(avg, heavy), light).iterator(); // Stream.concat()

PS there is no ShardRouting::load, it's used here for brevity.
Attempted in de052a3
:'-) ohboi. I would still opt for a lazy sequence rather than a sorted list. I believe the expected case is to have some average shards to move, hence allocating and sorting a list seems redundant; a single pass with a filter should suffice. Especially in the context of a 10k-shard node, there is a high probability of having a good average shard.
Yes, good point, made lazier in d59ea2b.
I also didn't bother sorting inside the low/medium/high ranges, but can easily add that if we think it's worth it for the determinism.
I took a spin through and left some thoughts.
);
balancer.allocateUnassigned();
balancer.moveShards();
balancer.moveNonPreferred();
What are your thoughts on this ordering? I figured we'd need to run the new logic before moveShards, since moveShards could trigger the simulator to consider the hot-spot addressed, before we check in moveNonPreferred.
I think it depends on whether you think fixing hot-spots or shutting down nodes is more important. The naming would suggest it's more important to move canRemain=NO than canRemain=NOT_PREFERRED shards, therefore moveShards should get first priority at movement, but like you say, because moveNotPreferred prioritises movements, we may make sub-optimal moves in the event of an intersection between NO and NOT_PREFERRED. If there is an intersection, that would most likely suggest there is currently a shutting-down node that is also hot-spotting, in which case we have to evacuate all the shards either way.
I don't have a strong preference here because I hope it's rare enough not to matter. I'm inclined to follow the naming and prioritise moving NOs; perhaps we need to apply our prioritisation there too?
Since this currently breaks the shard moves simulator, can we run moveNonPreferred before moveShards? Otherwise we could have eliminated the hot-spot in the simulator, with moveShard shard relocations, by the time the code gets here.
I don't think the ordering will have a significant impact on shard movement, especially since, to fix a hot-spot, we're moving a single shard per node per 30-second stats refresh cycle: I'd expect it to be irrelevant noise compared to the number of shards moved away from a shutting-down node. This is also the allocator; the decisions we make here don't affect the order of shard movement. You'd have to do something with the Reconciler if you wanted to affect shard movement order.
}

private boolean moveASingleNonPreferredShard() {
    for (ShardRouting shardRouting : nonPreferredShardIteratorFactory.createNonPreferredShardIterator(allocation)) {
I expect you might be trying to be too generic. We know that we're dealing with the write load decider, and every shard will return not-preferred when there is a hot-spot. So we want a list of shards on a particular node (that's hot-spotting) ordered by write load estimate.
Not a concrete suggestion, rather a general thought on the implementation approach.
I was more explicit about that in my first attempt at this, but I don't think the approach was well received, as it's something of a departure from the way deciders work currently.
What you suggest does actually happen in the DefaultNonPreferredShardIteratorFactory: up-front we build a list of nodes ordered by queue latency, then we return the shards from those nodes ordered by preference-for-moving. At this stage it's distance from mean write load, but that's subject to change.
I build this list lazily because the hope is we don't have to iterate too far through it to find a shard that's movable.
So the IteratorFactory interface is generic, but the default implementation is very tailored to what we know about the hot-spot decider.
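A self-contained toy of that shape: the hottest node's shards are visited first, and each node's shards are only ordered when the iteration actually reaches that node. The ordering key here is descending write load purely for illustration; as noted above, the PR currently uses distance from the mean:

import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

class LazyNonPreferredOrder {
    record Node(String name, long queueLatencyMillis, List<String> shards) {}

    // Hottest node first; a node's shards are sorted only when the outer stream reaches that node.
    static Iterator<String> shardsInMovePreferenceOrder(List<Node> nodes, Map<String, Double> shardWriteLoad) {
        return nodes.stream()
            .sorted(Comparator.comparingLong(Node::queueLatencyMillis).reversed())
            .flatMap(node -> node.shards().stream()
                .sorted(Comparator.comparingDouble((String s) -> -shardWriteLoad.getOrDefault(s, 0.0))))
            .iterator();
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("N", 50, List.of("n1", "n2")),
            new Node("M", 100, List.of("m1", "m2"))
        );
        Map<String, Double> loads = Map.of("n1", 1.0, "n2", 3.0, "m1", 2.0, "m2", 5.0);
        shardsInMovePreferenceOrder(nodes, loads).forEachRemaining(System.out::println); // m2, m1, n2, n1
    }
}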
I haven't taken a good look at your first attempt. My first thought would be to avoid using the deciders until we've got a list of ordered shards for a hot node to try to relocate. Start by filtering down to the nodes exceeding the queue latency threshold, discard the other nodes. Then the allocation deciders only come into play to select a new node assignment.
We would be ignoring the WriteLoadDecider's canRemain method... It's not obvious to me how to not ignore it 🤔 To move moveNonPreferred before moveShards, we'd have NO answers covering NOT_PREFERRED answers, which is another problem with using canRemain.
up-front we build a list of nodes ordered by queue latency
We only need to look at hot-spotting nodes, and there's no need to create a relative order for the nodes.
I think my initial thought would be to run through all the nodes, call canRemain on all shards from each node, and collect those with a NOT_PREFERRED result that have a YES result elsewhere. Then call the strategy to pick the one shard to move.
I think we've discussed this, but maybe it was discarded?
I think this prepares us better for multiple deciders saying not-preferred.
Sorry @henningandersen, I did deviate a little from what we discussed, but thinking only from an optimisation perspective, I figured it would be conceptually the same structure.
i.e. my implementation does:
- build the iterator according to the prioritisation logic
- work our way down it calling canRemain/decideMove to determine the first shard that can move, and then move it
- repeat
only because, if I understand correctly, you've advocated for:
- call canRemain/decideMove to determine the set of shards that want to move and can move
- pick one using the prioritisation logic and move it
- repeat
My thinking was that the latter approach would do loads of work up front (e.g. in a cluster with ~10,000 shards on each of multiple hot-spotted nodes) only to then move a single shard. The decideMove logic is ~O(n^2), whereas the prioritisation logic is almost certainly cheaper than that (just a sort) and can currently be performed lazily, one node at a time.
I think you mentioned there may be cases where we could implement special logic if we knew the full set of shards that were moveable in the prioritisation logic, but it seems to me we should defer that cost until we identify some such scenarios?
I think my initial thought would be to run through all the nodes, call canRemain on all shards from that node and collect those with NOT_PREFERRED result that have a YES result elsewhere. Then call the strategy to pick the one shard to move.
If a node is write load hot-spotting, then canRemain will return NOT_PREFERRED for every shard, because queue_latency > queue_latency_threshold will always be true. Any not-preferred decider will run on node-level resources and return the same canRemain answer for all shards, I think?
canAllocate YES sounds like a nice filter.
I think this prepares us better for multiple deciders saying not-preferred.
I can't see a way for the balancer not to know about individual deciders for not-preferred / hotspots. Suppose the heap usage returned not-preferred (it doesn't, but for the sake of discussion).
If the balancer checks all the deciders for canRemain NOT_PREFERRED and finds a hot-spot, we move on to correcting the hot-spot. However, to correct the hot-spot, we need to know which resource is hot-spotting, because the shard order prioritization will be different for write load vs heap usage.
I think the balancer needs to know about individual deciders to address hot-spots, in order to prioritize the shards for relocation. Alternatively, a decider would need to be responsible for providing a strategy for ordering shards -- the AllocationDeciders would return a list of strategies, and the balancer runs a strategy per resource hot-spot.
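A hedged sketch of that last alternative, a decider-supplied ordering strategy; every name below is hypothetical and exists only to illustrate the shape being discussed, it is not an existing Elasticsearch API:

import java.util.Comparator;
import java.util.Map;

// Hypothetical interface: each not-preferred decider supplies how to order a hot node's shards.
interface HotSpotOrderingStrategy {
    String resource();                  // e.g. "write_load" or "heap"
    Comparator<String> shardPriority(); // how to order a hot-spotted node's shards for relocation
}

class WriteLoadOrderingStrategy implements HotSpotOrderingStrategy {
    private final Map<String, Double> shardWriteLoad;

    WriteLoadOrderingStrategy(Map<String, Double> shardWriteLoad) {
        this.shardWriteLoad = shardWriteLoad;
    }

    @Override
    public String resource() {
        return "write_load";
    }

    @Override
    public Comparator<String> shardPriority() {
        // Illustrative only: relocate heavier writers first for a write-load hot-spot.
        return Comparator.comparingDouble((String s) -> -shardWriteLoad.getOrDefault(s, 0.0));
    }
}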
I think the main case I want to add next is the index anti-affinity and there I think the strategy of picking a relevant loaded shard of the candidates is still good. But I agree we may want a more advanced strategy. It could however also look at the base data again, determining out of the moveable shards which one to pick based on the known dimensions. That could be as simple as "if the node has a queue latency go by write-load, otherwise pick one, does not matter which (some determinism may be preferable though)".
}

private Iterator<ShardRouting> createShardIterator() {
    final var shardWriteLoads = allocation.clusterInfo().getShardWriteLoads();
It looks like you’re creating a list of shards across all nodes. I wonder if instead, we could first collect a list of nodes that are hot spotting, then create separate lists of shards (with their write loads, skip any shards with 0 load) for each hot spotting node from the allocation.clusterInfo().getShardWriteLoads(), and finally sort and iterate each shard list in the order we prefer, checking whether we can move each shard until we find one that’s movable for each node. Still need an iterator to sort and manage a list of shards, but it might be simpler just iterating at that level? Then the nodes don't need iterators.
We only want to move one shard per node. Not obvious to me how to easily achieve that when iterating all shards at once.
The approach here is it's an iterator that returns the shards we'd like to move next, in order of preference. Once we move a shard we ask again for this list. We have to do this because every time we move a shard it can change the list of shards we want to move (e.g. if a shard movement resolves a hot-spot, the shards from that node might appear further down the list in the subsequent iterator, and a lesser-hot-spotted node might appear at the front of it instead).
I tried to not do any filtering here, because this is supposed to be just the prioritisation logic; the deciders themselves decide whether a shard canRemain (it would seem to be duplicating logic to also do that here).
If we go through one of these iterators and don't find any shard we want to move, we break out of the loop and continue to balancing.
We have to do this because every time we move a shard it can change the list of shards we want to move
But moving a single shard resolves the hot spot. Even if we move one shard off of a NodeA, the priority order for further shards to move away from NodeA shouldn't be dynamic 🤔
if a shard movement resolves a hot-spot, the shards from that node might appear further down the list in the subsequent iterator, and a lesser-hot-spotted node might appear at the front of it instead
IIUC, you're trying to fairly spread node hot-spot resolution? Like pick a shard for NodeA, then pick a shard for NodeB, before coming back to NodeA. I don't think that matters for the allocator, which comes up with the final allocation, not the plan for which shards to move first. NodeA is hot-spotting, and we can focus on NodeA's shards to resolve the hot spot, before moving on the NodeB's shards. We wouldn't be assigning any of NodeA or NodeB's shards to NodeA or NodeB because they are hot / not-preferred, so there's no interaction there, and no need for evenness / fairness in selection order.
IIUC, you're trying to fairly spread node hot-spot resolution?
No, as discussed on zoom, the iterator represents our preference for the next move. e.g. if there are three nodes (M, N, O) with queue latencies (100, 50, 0), the shards will be iterated in the order
M1, M2, M3, M4, N1, N2, N3, O1, O2
where Mx denotes the shard on node M that is the x-th most desirable to move.
So we'll iterate through that list finding the first of those shards that can move somewhere, then execute the move, then we'll ask for that list again in the next iteration.
Say we moved a shard from M to O and now our latencies for (M, N, O) are (0, 50, 0); the next iterator will look like
N1, N2, N3, M1, M2, O1, O2, O3
because N is the most likely to be hot-spotted, so it goes to the front of the list.
Then we move a shard off of N and the new latencies change to (M, N, O) = (0, 0, 0). Then the iterator would look something like (although M, N, O could be in any order because they're all equal):
N1, N2, M1, M2, O1, O2, O3, O4
which we'd iterate through and find no shard with canRemain = NOT_PREFERRED, so we'd make no movements and move on to the next phase.
This ties in with my prior comment about actually calling canRemain first.
I think a theme I see is the idea of trying to prioritize movements. But the allocator does not control that. The Reconciler controls move prioritization. The exception is unassigned shards, which has an early exit out of the DesiredBalanceShardsAllocator. Otherwise, in the existing code, it makes sense to fix the NO decisions of the allocators before rebalancing while continuing to obey NO decisions: addressing NOs after rebalancing would potentially unbalance the cluster, undoing that work. From this perspective, we just need moveNonPreferred to be in a place that it can run properly.
It’d be great to verbally discuss my comment threads with you, when you have time. To fast track the thread resolution.
balancerSettings,
writeLoadForecaster,
balancingWeightsFactory,
nonPreferredShardIteratorFactory
Rather than passing the factory implementation through the BalancedShardsAllocator and Balancer constructors, could we directly add the logic to the Balancer in the first place? Avoid the factory. The other objects passed through the constructors are usually shared with other components, whereas the new logic only runs in the Balancer.
The moveNonPreferred could be gated by the WRITE_LOAD_DECIDER_ENABLED_SETTING. An alternative to the NOOP implementation. I don’t think tests would even be able to exercise moveNonPreferred without some hot-spot mocking to get to a 5 second queue latency, even if the new logic were enabled by default.
Though perhaps there was some other reason for the NOOP / adding it here that I'm missing. Factories seem to come into play often for stateful vs stateless impls, but we don't have an alternative real implementation.
The idea here is just to put a boundary on the responsibilities of the two classes: the BalancedShardsAllocator doesn't care about the iteration order of the shards - as long as the iterator contains all the shards, this logic will work. Similarly to how the BalancedShardsAllocator doesn't care what the individual deciders do, it just knows about YES/NO/THROTTLE/NOT_PREFERRED.
In my opinion the interface delineates responsibilities, and allows the reader to not concern themselves with the implementation details of the iteration order when grok-ing the BalancedShardsAllocator. It also frees us up to bake all kinds of knowledge about the configured deciders into our implementation without that knowledge leaking into the BalancedShardsAllocator.
The default implementation could equally be
allocation -> allocation.routingNodes().nodeInterleavedShardIterator()
Left a few initial comments. Maybe we need to POC the direction of doing canRemain first in a separate PR to figure out the direction (or maybe I am the only one thinking that is how it should work)?
collectAndRecordNodeWeightStats(balancer, balancingWeights, allocation);
try {
    balancer.allocateUnassigned();
    if (balancer.moveNonPreferred()) {
I think this should go after moveShards (but before balance)? It seems more important to use the incoming recovery budget on a target node for handling shutting down nodes or other rules than non-preference?
Superseded by #135058
This is an attempt at implementing the moveNonPreferred option for dealing with hot-spots by moving shards. I opted to make the iteration order pluggable instead of providing a pluggable prioritisation mechanism. It strikes me that, given we can only make a single move at a time, we probably want that to be as cheap as possible.
If we work out all the shards that could move and then ask which one to move, we do a lot of work up front for a single movement, especially in a large cluster. If instead (as in this PR) we iterate through the shards in priority order, we can stop as soon as we find a move that we can make, having hopefully only assessed a few shards for canRemain and move-ability.
The DefaultNonPreferredShardIteratorFactory is very much over-fitted to the problem of moving shards off of hot-spotted nodes; we can make that more general when we have other reasons for being NOT_PREFERRED. It lazily populates the iterator, returning the shards from a single hot-spotted node at a time, because we take the first move-able shard and then discard the rest of the iterator.